89 research outputs found

    Online Discrimination of Nonlinear Dynamics with Switching Differential Equations

    How to recognise whether an observed person walks or runs? We consider a dynamic environment where observations (e.g. the posture of a person) are caused by different dynamic processes (walking or running) which are active one at a time and which may transition from one to another at any time. For this setup, switching dynamic models have been suggested previously, mostly for linear and nonlinear dynamics in discrete time. Motivated by basic principles of computations in the brain (dynamic, internal models), we suggest a model for switching nonlinear differential equations. The switching process in the model is implemented by a Hopfield network, and we use parametric dynamic movement primitives to represent arbitrary rhythmic motions. The model generates observed dynamics by linearly interpolating the primitives weighted by the switching variables and is constructed such that standard filtering algorithms can be applied. In two experiments with synthetic planar motion and a human motion capture data set, we show that inference with the unscented Kalman filter can successfully discriminate several dynamic processes online.
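    A minimal sketch of the generative idea, assuming two hypothetical rhythmic primitives (f_walk, f_run) and softmax-normalised switching variables; the random-walk prior on the switching variables merely stands in for the Hopfield-network dynamics of the paper, and the resulting transition function would be plugged into a standard unscented Kalman filter.

```python
import numpy as np

def f_walk(x, dt):
    # hypothetical rhythmic primitive 1: slow rotation in the plane
    th = 2 * np.pi * 0.8 * dt
    R = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
    return R @ x

def f_run(x, dt):
    # hypothetical rhythmic primitive 2: faster rotation
    th = 2 * np.pi * 2.5 * dt
    R = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
    return R @ x

def transition(z, dt=0.01):
    """Augmented state z = [x (2D posture), s (2 switching variables)].

    The observed dynamics are a convex combination of the primitives,
    weighted by softmax-normalised switching variables; the switching
    variables themselves are kept constant in the mean (a random-walk
    prior standing in for the Hopfield dynamics of the paper)."""
    x, s = z[:2], z[2:]
    w = np.exp(s) / np.exp(s).sum()               # normalised weights
    x_next = w[0] * f_walk(x, dt) + w[1] * f_run(x, dt)
    return np.concatenate([x_next, s])

# transition() would serve as the process model of an unscented Kalman
# filter, with observation model h(z) = z[:2]; the filtered switching
# variables then indicate online which primitive explains the data.
```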

    Kick-starting GPLVM Optimization via a Connection to Metric MDS

    The Gaussian Process Latent Variable Model (GPLVM) is an attractive model for dimensionality reduction, but the optimization of the GPLVM likelihood with respect to the latent point locations is difficult, and prone to local optima. Here we start from the insight that in the GPLVM, we should have that k(x_i, x_j) ≈ s_ij, where k(x_i, x_j) is the kernel function evaluated at latent points x_i and x_j, and s_ij is the corresponding estimate from the data. For an isotropic covariance function this relationship can be inverted to yield an estimate of the interpoint distances in the latent space, and these can be fed into a multidimensional scaling (MDS) algorithm. This yields an initial estimate of the latent locations, which can be subsequently optimized in the usual GPLVM fashion. We compare two variants of this approach to the standard PCA initialization and to the ISOMAP algorithm, and show that our initialization converges to the best GPLVM likelihoods on all six tested motion capture data sets.
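    A minimal sketch of the kick-start, assuming an RBF (squared-exponential) covariance k(x_i, x_j) = sigma_f^2 exp(-||x_i - x_j||^2 / (2 l^2)); the data-side similarity estimate S is taken as a given input (the abstract does not spell out how it is computed), and the parameter names are illustrative.

```python
import numpy as np
from sklearn.manifold import MDS

def kickstart_latents(S, latent_dim=2, sigma_f2=1.0, lengthscale=1.0):
    """Invert an isotropic RBF covariance to obtain latent inter-point
    distances from a data-derived similarity matrix S, then embed those
    distances with metric MDS to initialise the GPLVM latent points."""
    ratio = np.clip(S / sigma_f2, 1e-12, 1.0)            # keep the log well defined
    D = np.sqrt(-2.0 * lengthscale**2 * np.log(ratio))   # inter-point distances
    mds = MDS(n_components=latent_dim, dissimilarity="precomputed")
    return mds.fit_transform(D)                          # initial latent locations
```

    The returned coordinates would then be handed to the usual GPLVM likelihood optimisation as its starting point, in place of a PCA initialisation.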

    Nonlinear Dimensionality Reduction for Motion Synthesis and Control

    Synthesising motion for human character animations or humanoid robots is vastly complicated by the large number of degrees of freedom in their kinematics. Control spaces become so large that automated methods designed to adaptively generate movements become computationally infeasible or fail to find acceptable solutions. In this thesis we investigate how demonstrations of previously successful movements can be used to inform the production of new movements that are adapted to new situations. In particular, we evaluate the use of nonlinear dimensionality reduction techniques to find compact representations of demonstrations, and investigate how these can simplify the synthesis of new movements. Our focus lies on the Gaussian Process Latent Variable Model (GPLVM), because it has proven to capture the nonlinearities present in the kinematics of robots and humans. We present an in-depth analysis of the underlying theory which results in an alternative approach to initialise the GPLVM based on Multidimensional Scaling. We show that the new initialisation is better suited than PCA for nonlinear, synthetic data, but note that its advantage shrinks on motion data. Subsequently we show that the incorporation of additional structure constraints leads to low-dimensional representations which are sufficiently regular that, once learned, dynamic movement primitives can be adapted to new situations without the need for relearning. Finally, we demonstrate in a number of experiments where movements are generated for bimanual reaching that, through the use of nonlinear dimensionality reduction, reinforcement learning can be scaled up to optimise humanoid movements.

    Latent Spaces for Dynamic Movement Primitives

    Dynamic movement primitives (DMPs) have been proposed as a powerful, robust and adaptive tool for planning robot trajectories based on demonstrated example movements. Adaptation of DMPs to new task requirements becomes difficult when demonstrated trajectories are only available in joint space, because their parameters do not in general correspond to variables meaningful for the task. This problem becomes more severe with an increasing number of degrees of freedom and hence is particularly an issue for humanoid movements. It has been shown that DMP parameters can directly relate to task variables when DMPs are learned in latent spaces resulting from dimensionality reduction of demonstrated trajectories. As we show here, however, standard dimensionality reduction techniques do not in general provide adequate latent spaces, which need to be highly regular. In this work we concentrate on learning discrete (point-to-point) movements and propose a modification of a powerful nonlinear dimensionality reduction technique (the Gaussian Process Latent Variable Model, GPLVM). Our modification makes the GPLVM more suitable for use with DMPs by favouring latent spaces with a highly regular structure. Even though in this case the user has to provide a structure hypothesis, we show that its precise choice is not important for achieving good results. Additionally, this modification overcomes one of the main disadvantages of the GPLVM: its dependence on the initialisation of the latent space. We motivate our approach on data from a 7-DoF robotic arm and demonstrate its feasibility on a high-dimensional human motion capture data set.
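    For reference, a minimal sketch of a single-dimensional discrete DMP in the standard Ijspeert-style formulation that this line of work builds on; in the latent-space setting described above, y would be a latent coordinate produced by the modified GPLVM rather than a joint angle. All gains and basis-function parameters are illustrative defaults, not values from the paper.

```python
import numpy as np

def dmp_rollout(y0, goal, weights, centers, widths,
                tau=1.0, alpha=25.0, beta=6.25, alpha_x=3.0, dt=0.01, T=1.0):
    """Integrate one discrete DMP dimension:
        tau*dv = alpha*(beta*(goal - y) - v) + f(x),   tau*dy = v,
        tau*dx = -alpha_x * x,
    with forcing term f(x) = x*(goal - y0) * sum_i w_i psi_i(x) / sum_i psi_i(x)."""
    y, v, x = y0, 0.0, 1.0
    traj = [y]
    for _ in range(int(T / dt)):
        psi = np.exp(-widths * (x - centers) ** 2)            # Gaussian basis functions
        f = x * (goal - y0) * (weights @ psi) / (psi.sum() + 1e-10)
        dv = (alpha * (beta * (goal - y) - v) + f) / tau
        dy = v / tau
        dx = -alpha_x * x / tau
        v, y, x = v + dv * dt, y + dy * dt, x + dx * dt
        traj.append(y)
    return np.array(traj)

# With zero forcing weights the DMP converges smoothly to the goal;
# learned weights shape the transient of the (latent) trajectory.
traj = dmp_rollout(y0=0.0, goal=1.0, weights=np.zeros(10),
                   centers=np.linspace(1.0, 0.01, 10), widths=np.full(10, 25.0))
```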

    Using Dimensionality Reduction to Exploit Constraints in Reinforcement Learning

    Reinforcement learning in the high-dimensional, continuous spaces typical of robotics remains a challenging problem. To overcome this challenge, a popular approach has been to use demonstrations to find an appropriate initialisation of the policy, in an attempt to reduce the number of iterations needed to find a solution. Here, we present an alternative way to incorporate prior knowledge from demonstrations of individual postures into learning, by extracting the inherent problem structure to find an efficient state representation. In particular, we use probabilistic, nonlinear dimensionality reduction to capture latent constraints present in the data. By learning policies in the learnt latent space, we are able to solve the planning problem in a reduced space that automatically satisfies task constraints. As shown in our experiments, this reduces the exploration needed and greatly accelerates learning. We demonstrate our approach for learning a bimanual reaching task on the 19-DOF KHR-1HV humanoid.
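    A minimal sketch of the general pattern described here: explore in a low-dimensional latent space and map candidates back to full joint postures before evaluating the task. Both decode(z) (e.g. a GPLVM mean prediction) and reward() are hypothetical placeholders, and the simple population-based random search stands in for whatever policy-search method is actually used.

```python
import numpy as np

def latent_policy_search(decode, reward, latent_dim=2, iters=200,
                         pop=20, sigma=0.1, rng=np.random.default_rng(0)):
    """Search for a good posture in the learnt latent space instead of the
    full joint space: perturb the current latent point, decode each candidate
    to joint angles, keep the best-scoring one.

    `decode(z) -> joint_angles` and `reward(joint_angles) -> float` are
    hypothetical stand-ins for the learnt mapping and the task reward."""
    z = np.zeros(latent_dim)
    best_r = reward(decode(z))
    for _ in range(iters):
        candidates = z + sigma * rng.standard_normal((pop, latent_dim))
        rewards = np.array([reward(decode(c)) for c in candidates])
        if rewards.max() > best_r:
            best_r, z = rewards.max(), candidates[rewards.argmax()]
    return z, best_r
```

    Because every decoded candidate lies on the manifold spanned by the demonstrations, exploration is confined to postures that already satisfy the latent task constraints.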

    Does Dimensionality Reduction improve the Quality of Motion Interpolation?

    In recent years nonlinear dimensionality reduction has frequently been suggested for the modelling of high-dimensional motion data. While it is intuitively plausible to use dimensionality reduction to recover low-dimensional manifolds which compactly represent a given set of movements, there is a lack of critical investigation into the quality of the resulting representations, in particular with respect to generalisability. Furthermore, it is unclear how consistently particular methods can achieve good results. Here we use a set of robotic motion data for which we know the ground truth to evaluate a range of nonlinear dimensionality reduction methods with respect to the quality of motion interpolation. We show that results are extremely sensitive to parameter settings and the data set used, but that dimensionality reduction can potentially improve the quality of linear motion interpolation, in particular in the presence of noise.
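    A minimal sketch of the comparison implied here: interpolate two postures linearly in joint space and, alternatively, linearly in a learnt latent space before mapping back. encode/decode are hypothetical stand-ins for a fitted dimensionality-reduction model, and the evaluation against ground-truth intermediate postures is omitted.

```python
import numpy as np

def interpolate_postures(q_a, q_b, encode, decode, steps=10):
    """Interpolate between two postures q_a, q_b (joint-angle vectors) both
    directly in joint space and via a learnt latent space.

    `encode(q) -> z` and `decode(z) -> q` are hypothetical stand-ins for a
    fitted dimensionality-reduction model (e.g. a GPLVM mapping)."""
    alphas = np.linspace(0.0, 1.0, steps)[:, None]
    joint_interp = (1 - alphas) * q_a + alphas * q_b            # linear in joint space
    z_a, z_b = encode(q_a), encode(q_b)
    latent_path = (1 - alphas) * z_a + alphas * z_b             # linear in latent space
    latent_interp = np.array([decode(z) for z in latent_path])  # map back to joints
    return joint_interp, latent_interp
```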

    Representation of Perceptual Evidence in the Human Brain Assessed by Fast, Within-Trial Dynamic Stimuli

    Bitzer S, Park H, Maess B, von Kriegstein K, Kiebel SJ. Representation of Perceptual Evidence in the Human Brain Assessed by Fast, Within-Trial Dynamic Stimuli. Frontiers in Human Neuroscience. 2020;14:9.
    In perceptual decision making, the brain extracts and accumulates decision evidence from a stimulus over time and eventually makes a decision based on the accumulated evidence. Several characteristics of this process have been observed in human electrophysiological experiments, especially a build-up of motor-related signals, supposedly reflecting accumulated evidence, when averaged across trials. Another recently established approach to investigating the representation of decision evidence in brain signals is to correlate the within-trial fluctuations of decision evidence with the measured signals. Here we report results of this approach for a two-alternative forced choice reaction time experiment measured using magnetoencephalography (MEG). Our results show (1) that decision evidence is most strongly represented in the MEG signals in three consecutive phases and (2) that the posterior cingulate cortex is involved most consistently, among all brain areas, in all three of the identified phases. As most previous work on perceptual decision making in the brain has focused on parietal and motor areas, our findings suggest that the role of the posterior cingulate cortex in perceptual decision making may currently be underestimated.
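    A minimal sketch of the within-trial correlation idea, assuming per-trial arrays of MEG signals and momentary decision evidence; the array layout and the simple per-trial Pearson correlation (averaged over trials, per sensor) are illustrative assumptions, not the exact, time-resolved analysis of the paper.

```python
import numpy as np

def evidence_correlation(meg, evidence):
    """Correlate within-trial decision-evidence fluctuations with MEG signals.

    meg:      array (trials, sensors, times) of measured signals
    evidence: array (trials, times) of momentary decision evidence

    For each trial and sensor, the evidence time course is correlated with the
    signal time course; the per-trial correlations are then averaged, giving
    one value per sensor."""
    e = evidence - evidence.mean(axis=1, keepdims=True)      # centre within trial
    m = meg - meg.mean(axis=2, keepdims=True)
    num = np.einsum('kst,kt->ks', m, e)                      # per-trial covariance
    den = np.sqrt((m ** 2).sum(axis=2) * (e ** 2).sum(axis=1, keepdims=True))
    return (num / (den + 1e-12)).mean(axis=0)                # shape: (sensors,)
```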

    Generic Decoding of Restricted Errors

    Several recently proposed code-based cryptosystems base their security on a slightly generalized version of the classical (syndrome) decoding problem. Namely, in the so-called restricted (syndrome) decoding problem, the error values stem from a restricted set. In this paper, we propose new generic decoders that are inspired by subset sum solvers and tailored to the new setting. The introduced algorithms take the restricted structure of the error set into account in order to utilize the representation technique efficiently. This leads to a considerable decrease in the security levels of recently published code-based cryptosystems.
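    To make the setting concrete, a minimal sketch of the restricted (syndrome) decoding problem itself: given a parity-check matrix H over F_q and a syndrome s, find an error vector whose entries all lie in a restricted set E with H e = s (mod q). The brute-force enumeration below is purely illustrative; the decoders proposed in the paper rely on representation/subset-sum techniques instead, and all parameters are toy values.

```python
import numpy as np
from itertools import product

def solve_rsdp_bruteforce(H, s, E, q):
    """Restricted syndrome decoding (toy brute force): find e in E^n with
    H @ e = s (mod q). Exponential in n, so only usable at toy sizes."""
    n = H.shape[1]
    for cand in product(E, repeat=n):
        e = np.array(cand)
        if np.array_equal(H @ e % q, s % q):
            return e
    return None

# Tiny example over F_11 with entries restricted to E = {1, 10}, i.e. {+1, -1} mod 11
# (a hypothetical restricted set; real parameters are far larger).
q, E = 11, (1, 10)
rng = np.random.default_rng(1)
H = rng.integers(0, q, size=(2, 4))
e_true = rng.choice(E, size=4)
s = H @ e_true % q
print(solve_rsdp_bruteforce(H, s, E, q))
```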

    Zero Knowledge Protocols and Signatures from the Restricted Syndrome Decoding Problem

    The Restricted Syndrome Decoding Problem (R-SDP) corresponds to the Syndrome Decoding Problem (SDP) with the additional constraint that entries of the solution vector must live in a desired subset of a finite field. In this paper we study how this problem can be applied to the construction of signatures derived from Zero-Knowledge (ZK) proofs. First, we show that R-SDP appears to be well suited for this type of application: almost all ZK protocols relying on SDP can be modified to use R-SDP, with important reductions in the communication cost. Then, we describe how R-SDP can be further specialized, so that solutions can be represented with a number of bits that is only slightly larger than the security parameter (which clearly provides an ultimate lower bound), thus enabling the design of ZK protocols with tighter and rather competitive parameters. Finally, we show that existing ZK protocols can greatly benefit from the use of R-SDP, achieving signature sizes on the order of 7 kB, which are smaller than those of several other schemes obtained from ZK protocols. For instance, this beats all schemes based on the Permuted Kernel Problem (PKP), almost all schemes based on SDP, and several schemes based on rank metric problems.